Video based Object 6D Pose Estimation using Transformers
We introduce VideoPose, a Transformer-based framework for 6D object pose estimation comprising an end-to-end attention-based architecture that attends to previous frames to estimate accurate 6D object poses in videos. Our approach leverages the temporal information in a video sequence for pose refinement while remaining computationally efficient and robust. Compared to existing methods, our architecture captures and reasons over long-range dependencies efficiently, iteratively refining its pose estimates across the video sequence. Experimental evaluation on the YCB-Video dataset shows that our approach is on par with state-of-the-art Transformer-based methods and performs significantly better than CNN-based approaches. Further, running at 33 fps, it is also more efficient and therefore applicable to a variety of applications that require real-time object pose estimation. Training code and pretrained models are available at
https://github.com/ApoorvaBeedu/VideoPose
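As a rough illustration of the idea above, the sketch below shows a Transformer that attends over per-frame features (with a causal mask, so each frame sees only itself and earlier frames) and regresses a quaternion-plus-translation pose per frame. This is a minimal sketch under our own assumptions, not the released VideoPose code: the feature dimension, layer counts, masking scheme, and pose head are all illustrative choices.

```python
# Minimal sketch (not the authors' implementation): a temporal Transformer
# that attends over per-frame CNN features to predict a 6D pose per frame.
# Feature dimensions, layer counts, and the pose parameterization
# (unit quaternion + translation) are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalPoseRefiner(nn.Module):
    def __init__(self, feat_dim=256, n_heads=8, n_layers=4, max_frames=32):
        super().__init__()
        self.pos_emb = nn.Parameter(torch.zeros(max_frames, feat_dim))
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # 7-D output per frame: unit quaternion (4) + translation (3)
        self.pose_head = nn.Linear(feat_dim, 7)

    def forward(self, frame_feats):
        # frame_feats: (batch, T, feat_dim) features from each video frame
        T = frame_feats.size(1)
        x = frame_feats + self.pos_emb[:T]
        # causal mask: each frame attends only to itself and previous frames
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        x = self.encoder(x, mask=mask)
        out = self.pose_head(x)
        quat = F.normalize(out[..., :4], dim=-1)   # rotation as unit quaternion
        trans = out[..., 4:]                       # translation
        return quat, trans

feats = torch.randn(2, 16, 256)           # 2 clips, 16 frames each
quat, trans = TemporalPoseRefiner()(feats)
print(quat.shape, trans.shape)             # (2, 16, 4) (2, 16, 3)
```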
Multimodal Contrastive Learning with Hard Negative Sampling for Human Activity Recognition
Human Activity Recognition (HAR) systems have been extensively studied by the
vision and ubiquitous computing communities due to their practical applications
in daily life, such as smart homes, surveillance, and health monitoring.
Typically, such systems are trained in a supervised manner, and their development requires access to large quantities of annotated data. However, the cost and difficulty of obtaining good-quality annotations make self-supervised methods an attractive alternative, and contrastive learning is one such method. A major component of successful contrastive learning, in turn, is the selection of good positive and negative samples. While positive samples are directly obtainable, sampling good negative samples remains a challenge.
As human activities can be recorded by several modalities, such as cameras and IMU sensors, we propose a hard negative sampling method for multimodal HAR, with a hard negative sampling loss for skeleton and IMU data pairs. We exploit hard negatives that have labels different from the anchor's but are projected nearby in the latent space, weighting them with an adjustable concentration parameter, as sketched below.
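The sketch below is a minimal, illustrative reconstruction rather than the paper's exact loss: one way to realize concentration-weighted hard negative sampling is to reweight each negative by a softmax over beta-scaled similarities, so that negatives projected closer to the anchor contribute more. The function name and all hyperparameter values are assumptions.

```python
# Minimal sketch of a contrastive loss that up-weights hard negatives,
# in the spirit described above (not the paper's exact formulation).
# Negatives more similar to the anchor receive larger weight via
# softmax(beta * similarity); beta is the adjustable concentration
# parameter. At beta = 0 the weights are uniform and this reduces to
# plain InfoNCE-style averaging over negatives.
import torch
import torch.nn.functional as F

def hard_negative_nce(anchor, positive, negatives, tau=0.1, beta=1.0):
    # anchor, positive: (B, D); negatives: (B, K, D); all L2-normalized
    pos_sim = (anchor * positive).sum(-1) / tau                     # (B,)
    neg_sim = torch.einsum('bd,bkd->bk', anchor, negatives) / tau   # (B, K)
    # concentration weighting: harder (more similar) negatives count more
    w = torch.softmax(beta * neg_sim, dim=-1) * neg_sim.size(-1)
    neg_term = (w * neg_sim.exp()).mean(-1)                         # (B,)
    loss = -(pos_sim.exp() / (pos_sim.exp() + neg_term)).log()
    return loss.mean()

# toy usage with skeleton/IMU embeddings from two modality encoders
B, K, D = 8, 16, 128
z_skel = F.normalize(torch.randn(B, D), dim=-1)    # anchor modality
z_imu = F.normalize(torch.randn(B, D), dim=-1)     # positive (paired) modality
negs = F.normalize(torch.randn(B, K, D), dim=-1)   # differently-labeled samples
print(hard_negative_nce(z_skel, z_imu, negs).item())
```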
Through extensive experiments on two benchmark datasets, UTD-MHAD and MMAct, we demonstrate the robustness of our approach in learning strong feature representations for HAR tasks, including in the limited-data setting. We further show that our model outperforms all other state-of-the-art methods on the UTD-MHAD dataset, and all self-supervised methods on the MMAct cross-session split, even when uni-modal data are used during downstream activity recognition.
Multi-Stage Based Feature Fusion of Multi-Modal Data for Human Activity Recognition
To properly assist humans in their needs, human activity recognition (HAR) systems need the ability to fuse information from multiple modalities. Our hypothesis is that multimodal sensors, visual and non-visual, tend to provide complementary information, each addressing the limitations of the others. In this work, we propose a multi-modal framework that learns to effectively combine features from RGB video and IMU sensors, and we show its robustness on the MMAct and UTD-MHAD datasets. Our model is trained in two stages: in the first stage, each input encoder learns to extract effective features, and in the second stage, the model learns to combine these individual features (a sketch follows below). We show significant improvements of 22% and 11% over video-only and IMU-only setups on the UTD-MHAD dataset, and of 20% and 12% on the MMAct dataset. Through extensive experimentation, we show the robustness of our model in the zero-shot setting and the limited-annotated-data setting. We further compare against state-of-the-art methods that use more input modalities and show that our method outperforms them significantly on the more difficult MMAct dataset and performs comparably on the UTD-MHAD dataset.
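A minimal sketch of this two-stage recipe follows. It is illustrative only: the encoder architectures, the fusion-by-concatenation design, whether the encoders are frozen in the second stage, and the class count are all our assumptions, not the paper's exact configuration.

```python
# Illustrative two-stage training sketch (assumptions, not the paper's design):
# stage 1 trains each modality encoder with its own classification head;
# stage 2 learns a fusion head over the concatenated per-modality features.
import torch
import torch.nn as nn

class FusionHAR(nn.Module):
    def __init__(self, video_dim=512, imu_dim=64, hidden=256, n_classes=27):
        super().__init__()
        self.video_enc = nn.Sequential(nn.Linear(video_dim, hidden), nn.ReLU())
        self.imu_enc = nn.Sequential(nn.Linear(imu_dim, hidden), nn.ReLU())
        # per-modality heads used only in stage 1
        self.video_head = nn.Linear(hidden, n_classes)
        self.imu_head = nn.Linear(hidden, n_classes)
        # fusion head trained in stage 2
        self.fusion_head = nn.Linear(2 * hidden, n_classes)

    def forward(self, video, imu, stage=2):
        v, m = self.video_enc(video), self.imu_enc(imu)
        if stage == 1:
            return self.video_head(v), self.imu_head(m)
        return self.fusion_head(torch.cat([v, m], dim=-1))

model = FusionHAR()
ce = nn.CrossEntropyLoss()
video, imu = torch.randn(4, 512), torch.randn(4, 64)
labels = torch.randint(0, 27, (4,))

# Stage 1: each input encoder learns to extract features on its own.
opt1 = torch.optim.Adam(
    list(model.video_enc.parameters()) + list(model.video_head.parameters())
    + list(model.imu_enc.parameters()) + list(model.imu_head.parameters()))
lv, lm = model(video, imu, stage=1)
(ce(lv, labels) + ce(lm, labels)).backward()
opt1.step()

# Stage 2: freeze the encoders (one common choice; an assumption here)
# and learn to combine the individual features.
for p in list(model.video_enc.parameters()) + list(model.imu_enc.parameters()):
    p.requires_grad_(False)
opt2 = torch.optim.Adam(model.fusion_head.parameters())
ce(model(video, imu, stage=2), labels).backward()
opt2.step()
```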